# Pre-trained language models
## Italian Legal BERT
A legal domain-specific model based on the Italian XXL BERT model, with additional pre-training on 3.7 GB of preprocessed text from the National Judicial Archive.

Large Language Model · Transformers · Other · dlicari · 1,511 · 20
## ViHealthBERT Base Word
ViHealthBERT is a pre-trained language model for Vietnamese health text mining, providing strong baseline performance in the healthcare domain.

Large Language Model · Transformers · demdecuong · 633 · 5
## AraELECTRA Base ArTyDiQA
An Arabic Wikipedia Q&A system based on AraELECTRA, specifically designed for Arabic reading-comprehension tasks.

Question Answering System · Transformers · Arabic · wissamantoun · 86 · 11
## KoGPT2
KoGPT2 is a Korean generative pre-trained model based on the Hugging Face Transformers framework, developed and open-sourced by SKT-AI.

Large Language Model · Transformers · taeminlee · 1,978 · 2
## CDLM
CDLM is a pre-trained model focused on cross-document language modeling, capable of processing semantic relationships between multiple documents.

Apache-2.0 · Large Language Model · Transformers · English · biu-nlp · 131 · 1
## Multi-Dialect BERT Base Arabic
A multi-dialect BERT model initialized with Arabic-BERT and trained on 10 million Arabic tweets, supporting identification of various Arabic dialects.

Large Language Model · Arabic · bashar-talafha · 357 · 8
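If these entries correspond to checkpoints on the Hugging Face Hub (an assumption; the listing pairs each model with an author handle such as dlicari), a masked model like Italian Legal BERT could be loaded for fill-mask inference roughly as sketched below. The repository slugs and the `hub_id`/`top_predictions` helpers are illustrative, not taken from the listing itself.

```python
# Sketch: loading one of the listed checkpoints with Hugging Face Transformers.
# Assumption: each entry is a Hub repository named "<author>/<model>", e.g.
# "dlicari/Italian-Legal-BERT" -- the exact slugs are not stated in the listing.


def hub_id(author: str, model: str) -> str:
    """Map a listing entry (author handle, model title) to an assumed Hub repo id."""
    return f"{author}/{model.replace(' ', '-')}"


def top_predictions(text: str, model_id: str, top_k: int = 5) -> list[str]:
    """Return the top_k tokens a masked LM proposes for the [MASK] slot."""
    from transformers import pipeline  # lazy import; downloads the model on first use

    fill = pipeline("fill-mask", model=model_id)
    return [p["token_str"] for p in fill(text, top_k=top_k)]


# Example call (requires network access to fetch the checkpoint):
# top_predictions("Il [MASK] ha rigettato il ricorso.",
#                 hub_id("dlicari", "Italian Legal BERT"))
```

The same `pipeline(...)` pattern applies to the other entries with a different task string, e.g. `"question-answering"` for the AraELECTRA model or `"text-generation"` for KoGPT2.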